OpenAI cracks down on Israeli "for-hire threat actor" Stoic
OpenAI announced that it has thwarted five influence operations in the past three months, including efforts by the Israeli company Stoic, which it labeled a "for-hire threat actor" targeting audiences in North America and Israel.
OpenAI identified and disrupted five covert influence operations that attempted to use its generative AI (GenAI) models to influence political proceedings in various countries. Among them was Stoic, an Israeli company targeting audiences in Canada, the U.S., Israel, India, and Ghana, according to a report OpenAI published on Friday. Stoic's activity was also highlighted in the quarterly threat report Meta released last week.
The report states, "Our investigations showed that, while the actors behind these operations sought to generate content or increase productivity using our models, these campaigns do not appear to have meaningfully increased their audience engagement or reach as a result of their use of our services."
Since launching its DALL-E image generator and ChatGPT chatbot in 2022, OpenAI has faced concerns that these tools could be used to influence political proceedings or conduct cyberattacks. In February, OpenAI and Microsoft published a report detailing how cybercriminals and hackers use ChatGPT to optimize cyberattacks by gathering intelligence on various systems and entities.
The latest report from OpenAI details how private and state actors attempted to leverage its products for influence campaigns. Apart from the Israeli company, the report identified activity from Russia aimed at Telegram users in Ukraine, Moldova, the Baltic states, and the U.S.; separate Russian activity targeting Ukrainian users; a Chinese threat actor spreading pro-China messages and criticism of the government's critics; and an Iranian actor posting content supporting Iran and criticizing Israel and the U.S.
OpenAI rated the impact of these campaigns on a scale of 1 (lowest) to 6 (highest), with none of the identified activities scoring higher than 2. According to the report, OpenAI disrupted these campaigns and blocked the parties involved from accessing its products.
The report identifies four common trends among the campaigns:
1. Content creation: All actors used AI models to create content, mainly text and sometimes images. Some aimed to improve the quality of their output by reducing linguistic errors, while others focused on producing large volumes of short responses published on third-party platforms. However, this did not translate into increased activity among authentic users.
2. Mix of old and new: The operations combined AI-generated content with traditional formats, such as manually written text or memes copied from the internet.
3. Faking engagement: Some campaigns used AI models to simulate social media activity, such as creating fake comments on posts, in violation of OpenAI's usage policies.
4. Increased productivity: Threat actors used AI models to boost their output, including generating tags for websites and analyzing social media sentiment.
The report describes each identified operation in detail. The Israeli company Stoic is characterized as a "for-hire threat actor" publishing anti-Hamas, anti-Qatari, pro-Israel, anti-BJP (India's ruling party), and pro-Histadrut content.
"We banned a cluster of accounts operated from Israel that were being used to generate and edit content for an influence operation that spanned X, Facebook, Instagram, websites, and YouTube. The network was operated by STOIC, a political campaign management firm in Israel," the report said.
The company's activity included content in English and Hebrew aimed at audiences in Canada, the U.S., and Israel, and, from early May, in India as well. It had also prepared to launch an influence campaign aimed at users in Ghana. "The operation also used our models to create fictional personas and bios for social media based on certain variables such as age, gender and location, and to conduct research into people in Israel who commented publicly on the Histadrut trade union in Israel. Our models refused to supply personal information in response to these prompts," the report said.
The content was published on Facebook, Instagram, and X (according to the report Meta published this week, it removed the accounts that posted this content on its platforms), as well as on websites posing as activist groups focused on Gaza and Jewish-Muslim relations. "In some cases, we identified this operation’s fake accounts commenting on social-media posts made by the operation itself, likely in an attempt to create the impression of audience engagement," the report states.
The content Stoic published was divided into several thematic campaigns, most of them loosely related to Gaza and, in May, to the Indian elections, beginning 24 hours before the polls opened. A campaign in Canada criticized "extremist Islamists"; a campaign in the U.S. accused pro-Palestinian demonstrators at universities of promoting anti-Semitism and terrorism; another campaign criticized UNRWA; a fourth focused on Qatar, presenting its investments in the U.S. as a threat to the American way of life; a fifth created Hebrew-language social media posts praising the Histadrut and its leadership; and the campaign in India criticized the ruling BJP and praised the opposition Congress party.
According to OpenAI, these efforts generated "little, if any" user activity beyond that of Stoic's own accounts. OpenAI rated the operation's impact at level 2 on its scale.
As mentioned, this is the second time this week that Stoic has featured in a report by a major technology company. In its report, Meta said it had removed a network of hundreds of Facebook and Instagram accounts that operated from Israel and ran an influence campaign targeting users in the U.S. and Canada.